
Continuous Processes

In the previous section, we discussed discrete-time processes. We now say that a continuous-time stochastic process $X_t$ has the Markov property if $P(X_t = y \mid X_r, 0 \le r \le s) = P(X_t = y \mid X_s)$ for all $t \ge s$, and is time-homogeneous if $P(X_{t+s} = y \mid X_s = x) = P(X_t = y \mid X_0 = x)$.

The Chapman-Kolmogorov equation still holds: $p^{(s+t)}_{xy} = \sum_{z\in S} p^{(s)}_{xz} p^{(t)}_{zy}$.
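
As a quick sanity check, here is a minimal numerical verification of the Chapman-Kolmogorov equation for an illustrative two-state chain. The generator matrix and its rates are arbitrary choices, and the matrix-exponential form $P^{(t)} = e^{tG}$ used below is only derived later in this section.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative two-state generator (rows sum to 0); the rates are arbitrary.
G = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])

def P(t):
    """Transition matrix P^(t) with entries p_xy^(t) = P(X_t = y | X_0 = x)."""
    return expm(t * G)

s, t = 0.3, 1.1
# Chapman-Kolmogorov: p_xy^(s+t) = sum_z p_xz^(s) p_zy^(t), i.e. P(s+t) = P(s) P(t)
print(np.allclose(P(s + t), P(s) @ P(t)))  # True
```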

Brownian Motion

We define Brownian motion as a continuous-time process $\{B_t: t\ge 0\}$ satisfying the following properties:

  • $B_0 = 0$
  • $B_t \sim N(0,t)$
  • It has independent normal increments (see the simulation sketch after this list). That is, if $0 \le t_1 \le s_1 \le t_2 \le s_2 \le \ldots \le t_k \le s_k$, then $B_{s_i} - B_{t_i} \sim N(0, s_i - t_i)$, and $\{B_{s_i} - B_{t_i}: 1 \le i \le k\}$ are independent.
  • $Cov(B_t, B_s) = \min(t,s)$
  • Sample paths are continuous (i.e. $t \mapsto B_t$ is continuous)
  • $\{B_t\}$ is a continuous-time martingale
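
The independent-increments property gives a direct recipe for simulating a Brownian path on a time grid; a minimal sketch, where the horizon and step count are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 1000          # illustrative horizon and number of steps
dt = T / n
t = np.linspace(0.0, T, n + 1)

# B_{t_{k+1}} - B_{t_k} ~ N(0, dt), independent across k, with B_0 = 0
increments = rng.normal(0.0, np.sqrt(dt), size=n)
B = np.concatenate(([0.0], np.cumsum(increments)))

print(t[-1], B[-1])       # one sample of B_T; over many paths, Var(B_T) is about T
```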

Proposition: If $Y_t = B^2_t - t$, then $Y_t$ is a martingale.

Def: $X_t = x_0 + \mu t + \sigma B_t$, where $x_0, \mu, \sigma \ge 0$ are constants, is called a diffusion (see the sketch after the list below).

  • $E[X_t] = x_0 + \mu t$; $x_0$ is the initial value and $\mu$ is the drift
  • $Var[X_t] = \sigma^2 t$; $\sigma$ is the volatility
  • $Cov[X_t, X_s] = \sigma^2 \min(t,s)$
  • $X_t \sim N(x_0 + \mu t, \sigma^2 t)$
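
Since a diffusion is just an affine transformation of a Brownian path, it can be simulated the same way; in this sketch the values of $x_0$, $\mu$, $\sigma$ and the grid are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 1.0, 1000                     # illustrative grid
dt = T / n
t = np.linspace(0.0, T, n + 1)
B = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), size=n))))

x0, mu, sigma = 1.0, 0.5, 2.0        # illustrative initial value, drift, volatility
X = x0 + mu * t + sigma * B          # X_t ~ N(x0 + mu*t, sigma^2 * t)
print(X[-1], x0 + mu * T)            # one sample of X_T vs. its mean E[X_T]
```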

Poisson Process

A (homogeneous) Poisson process with intensity $\lambda > 0$ is a collection $\{N_t: t \ge 0\}$ of non-decreasing integer-valued random variables that satisfies the following properties:

  • $N_0 = 0$
  • $0\le t_1 \le \ldots \le t_n \implies N_{t_1}, N_{t_2} - N_{t_1}, \ldots, N_{t_n} - N_{t_{n-1}}$ are independent and $N_{t_i} - N_{t_{i-1}} \sim Poisson(\lambda (t_i - t_{i-1}))$

That is, $N_t = N_t - N_0 \sim Poi(\lambda t)$, so $P(N_t = j) = \frac{(\lambda t)^j}{j!} e^{-\lambda t}$, and further we have:

  • $E[N_t] = \lambda t$ and $Var[N_t] = \lambda t$; the mgf is $M_{N_t}(s) = e^{\lambda t(e^{s} - 1)}$
  • Sums of independent Poisson random variables are Poisson: if $N_{t_i} \sim Poi(\lambda t_i)$ are independent, then $N_{t_1} + \ldots + N_{t_n} \sim Poi(\lambda (t_1 + \ldots + t_n))$
  • For different independent Poisson processes $\{N_{i,t}: t \ge 0\}$ with intensities $\lambda_i$, the superposition property holds: $\{\sum_{i=1}^k N_{i,t}: t \ge 0\}$ is a Poisson process with intensity $\lambda = \sum_{i=1}^k \lambda_i$.

If $X_t \sim Bin(n, p_n)$ where $p_n = \frac{\lambda t}{n} + o(1/n)$, then $X_t$ converges in distribution to $Poi(\lambda t)$ as $n \to \infty$.
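
A small numerical illustration of this limit, comparing the $Bin(n, \lambda t/n)$ pmf to the Poisson pmf as $n$ grows; the values $\lambda = 2$ and $t = 1.5$ are illustrative.

```python
import numpy as np
from scipy.stats import binom, poisson

lam, t = 2.0, 1.5                    # illustrative intensity and time
k = np.arange(10)
for n in (10, 100, 10_000):
    p_n = lam * t / n
    err = np.max(np.abs(binom.pmf(k, n, p_n) - poisson.pmf(k, lam * t)))
    print(n, err)                    # the maximum pmf difference shrinks as n grows
```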

A Poisson process has the following properties:

  • $P(N_{t + \Delta} - N_t = 1) = \lambda \Delta + o(\Delta)$
  • $P(N_{t + \Delta} - N_t \ge 2) = o(\Delta)$

Stopping Time for Poisson Process

If $T$ is a finite stopping time for $\{N_t: t \ge 0\}$, then $\{N_{T+t}-N_T: t \ge 0\}$ is a Poisson process with intensity $\lambda$ which is independent of $\{N_s: s \le T\}$.

Def: Let $S_{i-1}$ denote the time of the $(i-1)$th arrival (with $S_0 = 0$). Then $T_i = \inf\{t: N_{S_{i-1} + t} - N_{S_{i-1}} > 0\}$ for $i \ge 1$ is called the $i$th interarrival time, and the $T_i$ are i.i.d. $Exp(\lambda)$.

Def: $S_n = T_1 + \ldots + T_n$ is the arrival time of the $n$th event and follows $Gamma(n, \lambda)$. That is, $P(S_n > x) = \sum_{k = 0}^{n-1} \frac{(\lambda x)^k}{k!} e^{-\lambda x}$.

For $S_i \le t < S_{i + 1}$, we have $N_t = i$. Note that $E[S_n] = \frac{n}{\lambda}$ and $Var[S_n] = \frac{n}{\lambda^2}$.
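
The interarrival times give a simple way to simulate a Poisson process: generate i.i.d. $Exp(\lambda)$ gaps, take cumulative sums to get the arrival times $S_n$, and count how many fall below $t$. A minimal sketch, where the intensity, horizon, and buffer size are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
lam, t_max, n_paths = 3.0, 10.0, 5000      # illustrative parameters

counts = np.empty(n_paths, dtype=int)
for i in range(n_paths):
    # S_n = T_1 + ... + T_n with T_i i.i.d. Exp(lam); generous buffer of arrivals
    gaps = rng.exponential(1.0 / lam, size=int(10 * lam * t_max))
    S = np.cumsum(gaps)
    counts[i] = np.searchsorted(S, t_max, side="right")  # N_t = #{n : S_n <= t}

print(counts.mean(), lam * t_max)          # E[N_t] = lam * t
print(counts.var(), lam * t_max)           # Var[N_t] = lam * t as well
```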

Suppose $\{N_t: t \ge 0\}$ is a Poisson process with intensity $\lambda$, each arrival is labeled $i$ with probability $p_i$ where $\sum_{i} p_i = 1$, and $\{N_{i,t}: t \ge 0\}$ denotes the process counting the arrivals labeled $i$. Then each $\{N_{i,t}\}$ is a Poisson process with intensity $\lambda p_i$, and the processes are mutually independent.
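
A minimal sketch of this thinning property with two labels; the intensity $\lambda$, horizon $t$, and label probability $p$ are illustrative, and independence is only checked crudely via the sample correlation.

```python
import numpy as np

rng = np.random.default_rng(3)
lam, t, p = 4.0, 5.0, 0.3            # illustrative intensity, horizon, label probability
n_paths = 20_000

n1 = np.empty(n_paths, dtype=int)
n2 = np.empty(n_paths, dtype=int)
for i in range(n_paths):
    total = rng.poisson(lam * t)     # N_t, the total number of arrivals by time t
    labeled_1 = rng.random(total) < p
    n1[i] = labeled_1.sum()          # arrivals labeled 1
    n2[i] = total - n1[i]            # arrivals labeled 2

print(n1.mean(), lam * p * t)        # ~ lam * p * t
print(n2.mean(), lam * (1 - p) * t)  # ~ lam * (1 - p) * t
print(np.corrcoef(n1, n2)[0, 1])     # ~ 0, consistent with independence
```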

Continuous-Time, Discrete-Space Processes

Def: Let $\{X_t: t\ge 0\}$ be a collection of discrete random variables taking values in a state space $S$ and evolving in time $t$. $\{X_t\}$ is called a continuous-time Markov chain if:

  • the holding time of $X_t$ in state $x$ follows an exponential distribution with parameter $\lambda_x$
  • when the chain leaves state $x$, a new state $y\ne x$ is chosen according to the transition probabilities of a discrete-time Markov chain

That is, $\{X_t: t\ge 0\}$ is composed of a discrete-time Markov chain for the transitions, the jump chain, and exponential distributions for the holding times. The $\lambda_x$ are called holding-time parameters.

Transition Probabilities

For $x,y\in S$, let $\alpha(x,y)$ denote the transition rate from $x$ to $y$, and let $\alpha(x) = \sum_{y \ne x} \alpha(x,y)$.

Recall time-homogeneity: $P(X_{t+s} = y \mid X_s = x) = P(X_t = y \mid X_0 = x)$. Now, for a small time change $\Delta t$, we have $P(X_{t+\Delta t} = x \mid X_t = y) = \alpha(y,x)\Delta t + o(\Delta t)$, where $o(\Delta t)$ is some function of $\Delta t$ satisfying $\lim_{\Delta t \to 0} o(\Delta t)/\Delta t = 0$.

  • Notice that for $y = x$: $P(X_{t+\Delta t} = x \mid X_t = x) = 1 - \alpha(x) \Delta t + o(\Delta t)$

Denote $p_x(t) = P(X_t = x)$. Then we have:

$$p_x(t + \Delta t) = \sum_{y\in S} P(X_{t+\Delta t} = x, X_{t} = y) \\ = \sum_{y\in S} P(X_{t+\Delta t} = x \mid X_{t} = y)\, P(X_{t} = y) \\ = \big(1 - \alpha(x) \Delta t + o(\Delta t)\big)\, p_x(t) + \sum_{y\ne x} \big[\alpha(y,x)\Delta t + o(\Delta t)\big]\, p_y(t)$$

Subtracting $p_x(t)$, dividing by $\Delta t$, and letting $\Delta t \to 0$ gives the rate of change of $p_x(t)$: $\frac{d p_x(t)}{d t} = -\alpha(x) p_x(t) + \sum_{y\ne x} \alpha(y,x) p_y(t)$.

Since the state space is discrete, we can collect these rates into a matrix $G$ with entries $G(x,y) = \alpha(x,y)$ for $x \ne y$ and $G(x,x) = -\alpha(x)$. Then $p'(t) = p(t)G$, or equivalently $G = P'(0) = \lim_{t\to 0}\frac{P^{(t)} - I}{t}$. Such a $G$ is called the infinitesimal generator of the Markov chain. Plugging in the given $p_x(0)$, or $p(0)$ in vector form, we obtain the solution $p(t) = p(0) e^{tG}$.
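
A minimal sketch of computing $p(t) = p(0)e^{tG}$ with a matrix exponential, for an illustrative three-state generator:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative three-state generator: off-diagonal entries are rates alpha(x, y),
# diagonal entries are -alpha(x), so every row sums to 0.
G = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 0.5,  0.5, -1.0]])

p0 = np.array([1.0, 0.0, 0.0])   # start in state 0
t = 0.7
pt = p0 @ expm(t * G)            # p(t) = p(0) e^{tG}, the distribution of X_t
print(pt, pt.sum())              # a probability vector (sums to 1)
```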

Let $X_0 = x$ and $T = \inf\{t: X_t \ne x\}$. Then $P(T \le \Delta t) = \alpha(x) \Delta t + o(\Delta t)$. Recall that holding (interarrival) times follow an exponential distribution; here $T \sim Exp(\alpha(x))$.

Let $p_{xy}^{(t)} = P(X_t = y \mid X_0 = x)$, so that $P^{(t)}$ is the transition probability matrix of the Markov chain with entries $(x,y) = p_{xy}^{(t)}$. From the previous result, $P'(t) = P(t)G$ and $P^{(0)} = I$. Recalling the eigendecomposition $D = Q^{-1}GQ$, we have $P^{(t)} = e^{tG} = Q e^{tD} Q^{-1}$.

Therefore, the jump-chain transition probability is $p_{xy} = \alpha(x,y)/\alpha(x)$, the holding-time parameter is $\lambda_x = \alpha(x)$, and the expected holding time is $E[T_x] = 1/\lambda_x$.
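
Given a generator, the jump chain and holding-time parameters can be read off directly, as in this sketch (reusing the same illustrative generator as above):

```python
import numpy as np

# Same illustrative generator as above.
G = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 0.5,  0.5, -1.0]])

alpha = -np.diag(G)                  # holding-time parameters lambda_x = alpha(x)
P_jump = G / alpha[:, None]          # p_xy = alpha(x, y) / alpha(x) for x != y
np.fill_diagonal(P_jump, 0.0)        # the jump chain never stays in place

print(alpha, 1.0 / alpha)            # lambda_x and expected holding times E[T_x]
print(P_jump, P_jump.sum(axis=1))    # jump-chain rows sum to 1
```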

Convergence

Def: A probability distribution $\pi$ is called a stationary distribution of a continuous-time Markov chain iff $\pi P^{(t)} = \pi$ for all $t \ge 0$.

  • Equivalently, $\pi G = 0$
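
A minimal sketch of solving $\pi G = 0$ together with $\sum_x \pi(x) = 1$ numerically, reusing the illustrative three-state generator from above:

```python
import numpy as np
from scipy.linalg import expm

# Same illustrative generator as above.
G = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 0.5,  0.5, -1.0]])

# Stack pi G = 0 with the normalization sum(pi) = 1 and solve by least squares.
A = np.vstack([G.T, np.ones(len(G))])
b = np.append(np.zeros(len(G)), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)                                     # the stationary distribution
print(np.allclose(pi @ expm(5.0 * G), pi))    # pi P^(t) = pi for any t >= 0
```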

Recurrence

Def: For the chain started at $x$, let $S_x = \inf\{t \ge T_0: X_t = x\}$ be the first return time to $x$, that is, the sum of the holding times in all states visited before returning to $x$. Denoting by $\tau_x$ the return time of the jump chain, we have $S_x = \sum_{n = 0}^{\tau_x - 1} T_{n}$, where $T_n$ is the holding time in the $n$th state visited.

If $P_x(S_x < \infty) = 1$, then $X_t$ is called recurrent.

  • If $E_x[S_x] < \infty$, then $x$ is called positive recurrent.
  • Otherwise, $x$ is called null recurrent.
  • In either case, $\tau_x < \infty$ and $T_n < \infty$ for all $0 \le n \le \tau_x - 1$.

If $P_x(S_x < \infty) < 1$, then $X_t$ is called transient.

Note: in continuous time, there is no notion of periodicity.

Continuous-Time Success-Run Chain Application

Let $\{X_n\}$ be the jump chain. This chain is positive recurrent and, being irreducible, has a stationary distribution. Before returning to $0$ it climbs $0 \to 1 \to 2 \to \cdots$, i.e. $n < \tau_0 \implies X_n = n$, so the return time of the continuous-time chain is $S_0 = \sum_{k=0}^{\tau_0 - 1} T_k$.

Since the jump chain $\{X_n\}$ is recurrent, $\{X_t\}$ is also recurrent, but positive recurrence is not guaranteed, since the continuous-time chain also depends on the holding times.

The holding-time parameters of the success-run chain are $\lambda(k) = \frac{1}{2^k}$ for $k \ge 0$; that is, the expected holding time in state $n$ is $E[T_n] = \frac{1}{\lambda(n)} = 2^n$.

Now, $E_0[S_0] = \sum_k E[S_0 \mid \tau_0 = k] P_0(\tau_0 = k) = \sum_k\sum_{n = 0}^{k-1}\frac{E[T_n]}{2^{k - 1}} = \sum_k \frac{2^k-1}{2^{k - 1}} = \infty$.
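
A quick numeric look at the partial sums of this series: each term tends to $2$, so the partial sums grow roughly like $2k$ and the series diverges.

```python
# Partial sums of sum_k (2^k - 1) / 2^(k-1): each term tends to 2,
# so the partial sums grow roughly like 2k and the series diverges.
partial = 0.0
for k in range(1, 21):
    partial += (2.0**k - 1.0) / 2.0**(k - 1)
    if k % 5 == 0:
        print(k, partial)
```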

That is, $\{X_t\}$ is null recurrent.

Proposition

Consider an irreducible continuous-time Markov chain with a recurrent jump chain. Then a stationary distribution $\pi$ exists iff $\{X_t\}$ is positive recurrent. The stationary distribution is unique and satisfies $\pi(x) > 0$ for all $x\in S$.

Theorem

Consider an irreducible continuous-time Markov chain with a recurrent jump chain, a stationary distribution $\pi$, and transition probabilities $p_{xy}^{(t)}$ (i.e. an ergodic chain). Then $\lim_{t\to \infty} p_{xy}^{(t)} = \pi(y)$ for all $x,y \in S$.

Queuing Theory

Def: An M/M/1 queue is a queue $\{Q_t: t\ge 0\}$, where $Q_t$ is the number in the queue at time $t$, with interarrival times i.i.d. $Exp(\lambda)$ and service times i.i.d. $Exp(\mu)$, and generator matrix $G = \begin{bmatrix} -\lambda & \lambda & 0 & 0 & 0 & \cdots \\ \mu & -(\lambda + \mu) & \lambda & 0 & 0 & \cdots \\ 0 & \mu & -(\lambda + \mu) & \lambda & 0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}$.

  • A G/G/1 queue assumes both interarrival and service times are i.i.d., but not necessarily from $Exp(\lambda)$ and $Exp(\mu)$.
  • M/M/1 is a homogeneous Markov process.

Theorem

$\lambda < \mu \implies \pi \sim$ Geometric$(1-\frac{\lambda}{\mu})$ (i.e. $\lim_{t\to \infty} P(Q_t = k) = (\frac{\lambda}{\mu})^k(1 - \frac{\lambda}{\mu})$)

$\lambda > \mu \implies$ no stationary distribution exists (since the jump chain is a transient simple random walk).
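
A minimal numerical check of the geometric stationary distribution, using an illustrative truncation of the M/M/1 generator at a finite queue length (the rates and truncation level are illustrative choices):

```python
import numpy as np

lam, mu, K = 1.0, 2.0, 200     # illustrative rates (lam < mu) and truncation level

# Truncated M/M/1 generator on states 0, 1, ..., K.
G = np.zeros((K + 1, K + 1))
for k in range(K + 1):
    if k < K:
        G[k, k + 1] = lam      # arrival
    if k > 0:
        G[k, k - 1] = mu       # service completion
    G[k, k] = -G[k].sum()

# Solve pi G = 0 with sum(pi) = 1.
A = np.vstack([G.T, np.ones(K + 1)])
b = np.append(np.zeros(K + 1), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

rho = lam / mu
geometric = (1 - rho) * rho ** np.arange(K + 1)   # (lam/mu)^k (1 - lam/mu)
print(np.max(np.abs(pi - geometric)))             # small (truncation/rounding error)
```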

Renewal Theory

Def: The process $\{N_t: t \ge 0\}$, where $N_t = \#\{n:T_n \le t\}$ is the number of renewals up to time $t$, is called a renewal process.

  • The renewal (arrival) times are $T_n = \sum_{k = 1}^{n} Y_k$ with $T_0 = 0$, where the $Y_k$ are i.i.d. interarrival times.
  • If the $Y_k$ follow an exponential distribution, then the renewal process is also a Poisson process.
  • The renewal process is a Markov process only when it is a Poisson process.

Elementary Renewal Theorem

For a renewal process $\{N_t: t \ge 0\}$ with $E[Y_n] = \mu < \infty$ for all $n\ge 2$:

  • $\lim_{t\to \infty}\frac{N_t}{t} = \frac{1}{\mu}$
  • $\lim_{t\to \infty}E\left[\frac{N_t}{t}\right] = \frac{1}{\mu}$
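
A minimal simulation check of $N_t/t \to 1/\mu$, using illustrative Uniform$(0,2)$ interarrival times (so $\mu = 1$); the horizon and buffer size are also illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
t_max = 10_000.0
mu = 1.0                                        # E[Y] for Uniform(0, 2)

# Generous buffer of interarrival times, then count renewals up to t_max.
Y = rng.uniform(0.0, 2.0, size=int(1.5 * t_max / mu))
T_n = np.cumsum(Y)
N_t = np.searchsorted(T_n, t_max, side="right")

print(N_t / t_max, 1.0 / mu)                    # close to 1/mu
```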

Blackwell Renewal Theorem

For a renewal process $\{N_t: t \ge 0\}$ with $E[Y_2] = \mu < \infty$ such that there is no $\lambda > 0$ with $P(Y_2 = k\lambda \text{ for some }k\in\Z) = 1$ (i.e. the interarrival distribution is non-lattice), we have, for all $h > 0$, $\lim_{t\to \infty}E[N_{t+h} - N_t] = \frac{h}{\mu}$.

In particular, if $h$ is small enough that $P(Y_n < h) = 0$, then $\lim_{t\to \infty}P(\exists n: t < T_n < t + h) = \lim_{t\to \infty}P[N_{t+h} - N_t \ge 1] = \frac{h}{\mu}$.

Theorem

For a renewal process $\{N_t: t \ge 0\}$ with $E[Y_2] = \mu < \infty$ and i.i.d. rewards (or costs) $R_i$ earned at renewal $i$, let $R_t = \sum_{i = 1}^{N_t} R_i$. Then $R_t/t \to E[R_1]/\mu$ with probability 1.
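
A minimal simulation check of the renewal-reward limit, with illustrative $Exp(1)$ interarrival times ($\mu = 1$) and Uniform$(0,3)$ rewards ($E[R_1] = 1.5$):

```python
import numpy as np

rng = np.random.default_rng(5)
t_max = 10_000.0
n = int(2 * t_max)                        # generous buffer of renewals

Y = rng.exponential(1.0, size=n)          # interarrival times, mu = E[Y] = 1
R = rng.uniform(0.0, 3.0, size=n)         # i.i.d. rewards, E[R_1] = 1.5

T_n = np.cumsum(Y)
N_t = np.searchsorted(T_n, t_max, side="right")
R_t = R[:N_t].sum()

print(R_t / t_max, 1.5 / 1.0)             # close to E[R_1] / mu
```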